Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main      #89      +/-   ##
==========================================
+ Coverage   99.49%   99.52%   +0.03%
==========================================
  Files          11       11
  Lines         981     1044      +63
==========================================
+ Hits          976     1039      +63
  Misses          5        5
==========================================
```
JMorado
left a comment
I'm still getting familiar with the codebase, but overall LGTM. Left a few minor comments. Thanks!
```python
# make sure we have vsites in the force field
assert ff.v_sites is not None
# this is awkward to specify in the yaml config file can we make it easier?
expected_ids = ["[#1:2]-[#8X2H2+0:1]-[#1:3] EP once"]
```
This id looks weird indeed. Can't it be specified simply as "[#1:2]-[#8X2H2+0:1]-[#1:3] EP"?
"once" is added here https://github.com/openforcefield/openff-interchange/blob/c2f82bb4d4beceef80deda257155bec5c5b038a0/openff/interchange/smirnoff/_virtual_sites.py#L122 (as it matches once) so the ID ends up being as displayed. I can't think of a way to get round this without making the expected id less precise, but let me know if you have any thoughts!
```python
values = trainable.to_values().detach()
# set the distance to outside the clamp region
values[-1] = 0.0
ff = trainable.to_force_field(values)
assert torch.allclose(
    ff.v_sites.parameters[0],
    torch.tensor([-0.0100, 3.1416, 0.0000], dtype=torch.float64),
```
I'm confused by this bit -- you set the distance of the last value to outside the clamp region, but then it's the first value that gets clamped to -0.01?
I've added a comment to clarify.
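For context, a minimal self-contained sketch (not descent's actual `Trainable` API; the variable names and bound values are illustrative only) of how per-element clamp bounds behave: only an entry outside its allowed region is pulled to the nearest boundary, while in-range entries pass through unchanged.

```python
import torch

# Hypothetical flat vector of trainable values (layout illustrative,
# not descent's real parameter ordering).
values = torch.tensor([0.5, 3.1416, 0.0], dtype=torch.float64)

# Per-element bounds; -inf / inf mean "unbounded" on that side.
clamp_lower = torch.tensor([-torch.inf, -torch.inf, 0.01], dtype=torch.float64)
clamp_upper = torch.tensor([torch.inf, torch.inf, torch.inf], dtype=torch.float64)

# Only the last entry (0.0) lies outside its [0.01, inf) region, so only
# it is moved, up to the 0.01 floor; the others are unchanged.
clamped = values.clamp(min=clamp_lower, max=clamp_upper)
```

Note that the position of a value in the flat trainable vector need not match its position in the reconstructed force-field parameter row, which appears to be the source of the confusion above.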
```python
lower, upper = config.limits.get(col, (None, None))
clamp_lower.append(-torch.inf if lower is None else lower)
clamp_upper.append(torch.inf if upper is None else upper)
```
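The pattern in the hunk above can be sketched end-to-end as follows; the `limits` dict and column names are made up for illustration (not the real descent config schema), and `torch.inf` stands in for a missing bound so that `clamp` leaves that side unconstrained.

```python
import torch

# Illustrative per-column limits; a missing column gets (None, None),
# i.e. no constraint on either side.
limits = {"distance": (0.01, None), "epsilon": (None, 5.0)}

clamp_lower, clamp_upper = [], []
for col in ["distance", "epsilon", "charge"]:
    lower, upper = limits.get(col, (None, None))
    clamp_lower.append(-torch.inf if lower is None else lower)
    clamp_upper.append(torch.inf if upper is None else upper)

lower_t = torch.tensor(clamp_lower, dtype=torch.float64)
upper_t = torch.tensor(clamp_upper, dtype=torch.float64)

# 0.0 violates its 0.01 floor, 10.0 violates its 5.0 ceiling,
# and -3.0 is unconstrained.
values = torch.tensor([0.0, 10.0, -3.0], dtype=torch.float64)
clamped = values.clamp(min=lower_t, max=upper_t)
```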
Do we need scales and clamp limits for frozen values?
No, I was just going off the previous implementation, but I agree it would be better without them. I've refactored to avoid keeping track of frozen values.
descent/train.py (outdated)

```diff
- self._scales = torch.cat(scales)[self._unfrozen_idxs]
+ self._scales = torch.cat([param_scales, attr_scales])[self._unfrozen_idxs]
  self._clamp_lower = torch.cat(clamp_lower)[self._unfrozen_idxs]
  self._clamp_upper = torch.cat(clamp_upper)[self._unfrozen_idxs]
```
If _prepare_values returned scales and clamp limits only for unfrozen values, there wouldn't be a need to slice here. I'm not a fan of offsetting with len(param_scales) + len(attr_scales) as it seems a bit fragile.
Thanks, yes, agree this feels brittle. I've refactored to avoid this -- I now do:

```python
offset = 0
for block in blocks:
    all_values.append(block.values)
    all_unfrozen_idxs.append(block.unfrozen_idxs + offset)
    all_scales.append(block.scales)
    all_clamp_lower.append(block.clamp_lower)
    all_clamp_upper.append(block.clamp_upper)
    all_regularized_idxs.append(block.regularized_idxs + offset)
    all_regularization_weights.append(block.regularization_weights)
    offset += len(block.values)
```
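The refactored loop above can be made concrete with a minimal stand-in `Block` dataclass; only the offset bookkeeping matters here (shifting each block's local unfrozen indices into the concatenated frame), so the field set is trimmed relative to whatever descent actually uses.

```python
import torch
from dataclasses import dataclass


@dataclass
class Block:
    """Stand-in for a per-parameter block; fields trimmed for brevity."""

    values: torch.Tensor
    unfrozen_idxs: torch.Tensor  # local indices into this block's values


blocks = [
    Block(torch.tensor([1.0, 2.0, 3.0]), torch.tensor([0, 2])),
    Block(torch.tensor([4.0, 5.0]), torch.tensor([1])),
]

all_values, all_unfrozen_idxs = [], []
offset = 0
for block in blocks:
    all_values.append(block.values)
    # shift local indices into the concatenated frame
    all_unfrozen_idxs.append(block.unfrozen_idxs + offset)
    offset += len(block.values)

values = torch.cat(all_values)
unfrozen_idxs = torch.cat(all_unfrozen_idxs)
# select only the unfrozen values from the concatenated tensor
unfrozen = values[unfrozen_idxs]
```

Because each block shifts its own indices by the running offset, no global slicing with `self._unfrozen_idxs` is needed afterwards, which addresses the fragility concern above.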
This just came up at our iteration planning - since OpenFF is the current code owner here (and to keep merging from being ambiguous), I'm assigning Lily so she can help coordinate merging this after the review, but I'll assume this is waiting on Joao's review before that.
Thanks for the review @JMorado -- I think I've addressed all of your comments and have added a fairly large refactor of train.py which avoids keeping track of frozen values and tries to make the offsetting less brittle.
Description
This updates #73 to be compatible with recent updates to how regularisation is done, and also addresses the review comments I left there.
@JMorado you'll probably be the first to use this code, so would you be happy to briefly review? Thanks
Status